The Internet AS-Level Topology: Three Data Sources and One Definitive Metric
We calculate an extensive set of characteristics for Internet AS topologies
extracted from the three data sources most frequently used by the research
community: traceroutes, BGP, and WHOIS. We discover that traceroute and BGP
topologies are similar to one another but differ substantially from the WHOIS
topology. Among the widely considered metrics, we find that the joint degree
distribution appears to fundamentally characterize Internet AS topologies as
well as narrowly define values for other important metrics. We discuss the
interplay between the specifics of the three data collection mechanisms and the
resulting topology views. In particular, we show how the data collection
peculiarities explain differences in the resulting joint degree distributions
of the respective topologies. Finally, we release to the community the input
topology datasets, along with the scripts and output of our calculations. This
supplement should enable researchers to validate their models against real data
and to make more informed selection of topology data sources for their specific
needs.
Comment: This paper is a revised journal version of cs.NI/050803
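As a rough illustration of the joint degree distribution the abstract refers to, here is a minimal sketch (not the paper's released scripts) that computes P(k1, k2), the fraction of edges joining nodes of degrees k1 and k2:

```python
from collections import Counter, defaultdict

def joint_degree_distribution(edges):
    """Return P(k1, k2): the fraction of edges whose endpoints
    have degrees k1 <= k2, for an undirected simple graph."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    m = len(edges)
    jdd = defaultdict(float)
    for u, v in edges:
        k1, k2 = sorted((deg[u], deg[v]))
        jdd[(k1, k2)] += 1.0 / m
    return dict(jdd)

# Toy topology: a hub (node 0) plus one peripheral link.
edges = [(0, 1), (0, 2), (0, 3), (1, 2)]
jdd = joint_degree_distribution(edges)
```

Summing P(k1, k2) over all degree pairs gives 1, and the ordinary degree distribution follows by marginalizing, which is one way to see why the JDD constrains so many other metrics.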
On Compact Routing for the Internet
While there exist compact routing schemes designed for grids, trees, and
Internet-like topologies that offer routing tables of sizes that scale
logarithmically with the network size, we demonstrate in this paper that in
view of recent results in compact routing research, such logarithmic scaling on
Internet-like topologies is fundamentally impossible in the presence of
topology dynamics or topology-independent (flat) addressing. We use analytic
arguments to show that the number of routing control messages per topology
change cannot scale better than linearly on Internet-like topologies. We also
employ simulations to confirm that logarithmic routing table size scaling gets
broken by topology-independent addressing, a cornerstone of popular
locator-identifier split proposals aiming at improving routing scaling in the
presence of network topology dynamics or host mobility. These pessimistic
findings lead us to the conclusion that a fundamental re-examination of
assumptions behind routing models and abstractions is needed in order to find a
routing architecture that would be able to scale "indefinitely."
Comment: This is a significantly revised journal version of cs/050802
GT: Picking up the Truth from the Ground for Internet Traffic
Much of Internet traffic modeling, firewall, and intrusion detection research requires traces where some ground truth regarding application and protocol is associated with each packet or flow. This paper presents the design, development, and experimental evaluation of gt, an open source software toolset for associating ground truth information with Internet traffic traces. By probing the monitored host's kernel to obtain information on active Internet sessions, gt gathers ground truth at the application level. Preliminary experimental results show that gt's effectiveness comes at little cost in terms of overhead on the hosting machines. Furthermore, when coupled with other packet inspection mechanisms, gt can derive ground truth not only in terms of applications (e.g., e-mail), but also in terms of protocols (e.g., SMTP vs. POP3).
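The kernel-probing idea can be sketched as follows. This is a hypothetical Python illustration of the general approach on Linux, not gt itself (the sample line and inode below are made up): sockets listed in /proc/net/tcp carry an inode that can be matched against /proc/<pid>/fd/* symlinks to attribute each connection to a process, yielding application-level ground truth.

```python
import socket
import struct

def _addr(hexpair):
    """Decode the little-endian 'AABBCCDD:PPPP' form used in
    /proc/net/tcp into a dotted-quad address and integer port."""
    ip_hex, port_hex = hexpair.split(":")
    ip = socket.inet_ntoa(struct.pack("<I", int(ip_hex, 16)))
    return ip, int(port_hex, 16)

def parse_proc_net_tcp(text):
    """Parse /proc/net/tcp-formatted text into a list of
    ((local_ip, local_port), (remote_ip, remote_port), inode)."""
    conns = []
    for line in text.strip().splitlines()[1:]:  # skip header row
        f = line.split()
        conns.append((_addr(f[1]), _addr(f[2]), int(f[9])))
    return conns

# Fabricated sample in the kernel's format: a local MySQL socket.
SAMPLE = """\
  sl  local_address rem_address   st tx_queue rx_queue tr tm->when retrnsmt   uid  timeout inode
   0: 0100007F:0CEA 00000000:0000 0A 00000000:00000000 00:00000000 00000000     0        0 12345 1 0000000000000000 100 0 0 10 0
"""
conns = parse_proc_net_tcp(SAMPLE)
```

On a live system one would read the real /proc/net/tcp and then scan file-descriptor symlinks for `socket:[12345]` to recover the owning process name.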
Weighted network modules
The inclusion of link weights into the analysis of network properties allows
a deeper insight into the (often overlapping) modular structure of real-world
webs. We introduce a clustering algorithm (CPMw, Clique Percolation Method with
weights) for weighted networks based on the concept of percolating k-cliques
with high enough intensity. The algorithm allows overlaps between the modules.
First, we give detailed analytical and numerical results about the critical
point of weighted k-clique percolation on (weighted) Erdos-Renyi graphs. Then,
for a scientist collaboration web and a stock correlation graph we compute
three-link weight correlations and with the CPMw the weighted modules. After
reshuffling link weights in both networks and computing the same quantities for
the randomised control graphs as well, we show that groups of 3 or more strong
links prefer to cluster together in both original graphs.
Comment: 19 pages, 7 figures
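A compact sketch of the CPMw idea, under the common definition of clique intensity as the geometric mean of link weights (an assumption; the paper's exact formulation may differ): keep k-cliques whose intensity exceeds a threshold, join k-cliques sharing k-1 nodes, and read modules off the resulting clusters.

```python
from itertools import combinations
from math import prod

def cpmw_modules(weights, k, threshold):
    """Weighted clique percolation sketch. 'weights' maps undirected
    edges (u, v) to positive weights. Returns the node sets of the
    percolation clusters (modules may overlap)."""
    nodes = set()
    for u, v in weights:
        nodes.update((u, v))
    def w(u, v):  # positive weights assumed, so `or` is safe
        return weights.get((u, v)) or weights.get((v, u))
    cliques = []
    for c in combinations(sorted(nodes), k):
        ws = [w(u, v) for u, v in combinations(c, 2)]
        if all(x is not None for x in ws):
            intensity = prod(ws) ** (1.0 / len(ws))
            if intensity > threshold:
                cliques.append(set(c))
    # Connected components of the clique adjacency graph.
    modules, unused = [], list(range(len(cliques)))
    while unused:
        stack, comp = [unused.pop()], set()
        while stack:
            i = stack.pop()
            comp |= cliques[i]
            for j in list(unused):
                if len(cliques[i] & cliques[j]) == k - 1:
                    unused.remove(j)
                    stack.append(j)
        modules.append(comp)
    return modules

# Two triangles sharing node 2; the second has one very weak link.
weights = {(0, 1): 1.0, (1, 2): 1.0, (0, 2): 1.0,
           (2, 3): 1.0, (3, 4): 1.0, (2, 4): 0.01}
mods = cpmw_modules(weights, 3, 0.5)
```

The weak-link triangle falls below the intensity threshold and is excluded, which is exactly the filtering effect the weighted variant adds over plain CPM.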
Mapping peering interconnections to a facility
Annotating Internet interconnections with robust physical coordinates at the level of a building facilitates network management, including interdomain troubleshooting, and has practical value for helping to locate points of attacks, congestion, or instability on the Internet. But, like most other aspects of Internet interconnection, its geophysical locus is generally not public; the facility used for a given link must be inferred to construct a macroscopic map of peering. We develop a methodology, called constrained facility search, to infer the physical interconnection facility where an interconnection occurs among all possible candidates. We rely on publicly available data about the presence of networks at different facilities, and execute traceroute measurements from more than 8,500 available measurement servers scattered around the world to identify the technical approach used to establish an interconnection. A key insight of our method is that inference of the technical approach for an interconnection sufficiently constrains the number of candidate facilities such that it is often possible to identify the specific facility where a given interconnection occurs. Validation via private communication with operators confirms the accuracy of our method, which outperforms heuristics based on naming schemes and IP geolocation. Our study also reveals the multiple roles that routers play at interconnection facilities; in many cases the same router implements both private interconnections and public peerings, in some cases via multiple Internet exchange points. Our study also sheds light on peering engineering strategies used by different types of networks around the globe.
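The core constraint-intersection step can be sketched as follows; the AS numbers, facility names, and IXP footprint below are hypothetical, and the real method layers several further constraints on top of this:

```python
def constrained_facility_search(presence, as_a, as_b, ixp_facilities=None):
    """Sketch of constrained facility search: start from facilities
    where both ASes are colocated (from public colocation data),
    then, if the interconnection was inferred to cross a known IXP,
    keep only facilities where that IXP is deployed."""
    candidates = presence[as_a] & presence[as_b]
    if ixp_facilities is not None:
        candidates &= ixp_facilities
    return candidates

# Hypothetical colocation data for two ASes.
presence = {
    "AS64500": {"Equinix-AM7", "Telehouse-LON", "Interxion-FRA"},
    "AS64501": {"Equinix-AM7", "Interxion-FRA"},
}
# Hypothetical traceroute evidence that the link crosses one IXP,
# whose switching fabric is present only at one candidate facility.
ixp = {"Equinix-AM7"}
cands = constrained_facility_search(presence, "AS64500", "AS64501", ixp)
```

Without the IXP constraint two facilities remain; identifying the technical approach (public peering over that IXP) collapses the candidate set to one, which is the paper's key insight in miniature.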
Fair Quality of Experience (QoE) Measurements Related with Networking Technologies
[Invited Talk] Proceedings of the 8th International Conference on Wired/Wireless Internet Communications (WWIC 2010), June 1-3, 2010, Luleå, Sweden.
This paper addresses the topic of fair QoE measurements in networking. Research on new networking solutions is oriented toward improving the user experience. Any application or service can be improved, and the deployment of new solutions is necessary to achieve user satisfaction. However, different solutions exist; thus, it is necessary to select the most suitable ones. This selection is difficult to make, since QoE is subjective and comparison among different technologies is not trivial. The aim of this paper is to give an overview of how to perform fair QoE measurements to facilitate the study and research of new networking solutions and paradigms. Before addressing this problem, an overview of how networking affects QoE is provided.
This work has been funded by the CONTENT NoE from the European Commission (FP6-2005-IST-41) and by the Ministry of Science and Innovation under the CON-PARTE project (MEC, TEC2007-67966-C03-03/TCM) and T2C2 project grant (TIN2008-06739-C04-01).
Hyperbolic Geometry of Complex Networks
We develop a geometric framework to study the structure and function of
complex networks. We assume that hyperbolic geometry underlies these networks,
and we show that with this assumption, heterogeneous degree distributions and
strong clustering in complex networks emerge naturally as simple reflections of
the negative curvature and metric property of the underlying hyperbolic
geometry. Conversely, we show that if a network has some metric structure, and
if the network degree distribution is heterogeneous, then the network has an
effective hyperbolic geometry underneath. We then establish a mapping between
our geometric framework and statistical mechanics of complex networks. This
mapping interprets edges in a network as non-interacting fermions whose
energies are hyperbolic distances between nodes, while the auxiliary fields
coupled to edges are linear functions of these energies or distances. The
geometric network ensemble subsumes the standard configuration model and
classical random graphs as two limiting cases with degenerate geometric
structures. Finally, we show that targeted transport processes without global
topology knowledge, made possible by our geometric framework, are maximally
efficient, according to all efficiency measures, in networks with strongest
heterogeneity and clustering, and that this efficiency is remarkably robust
with respect to even catastrophic disturbances and damages to the network
structure.
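The two ingredients of the framework, the hyperbolic distance between nodes and the Fermi-Dirac connection probability that makes edges behave as non-interacting fermions, can be written down directly. This is a sketch in the standard notation of the framework, with R the disc radius and T the temperature; parameter values below are illustrative.

```python
from math import acosh, cosh, sinh, cos, exp, pi

def hyperbolic_distance(r1, th1, r2, th2):
    """Distance between two points given in native hyperbolic polar
    coordinates (radius r, angle theta) on the hyperbolic plane."""
    dth = pi - abs(pi - abs(th1 - th2))   # angle difference in [0, pi]
    if dth == 0.0:
        return abs(r1 - r2)
    arg = cosh(r1) * cosh(r2) - sinh(r1) * sinh(r2) * cos(dth)
    return acosh(max(1.0, arg))           # guard against rounding

def connection_probability(x, R, T):
    """Fermi-Dirac form: an edge at hyperbolic distance x is a
    fermion with energy x; R plays the chemical potential's role."""
    return 1.0 / (1.0 + exp((x - R) / (2.0 * T)))

R, T = 14.0, 0.5
x = hyperbolic_distance(5.0, 0.0, 6.0, pi / 2)
p = connection_probability(x, R, T)
```

At x = R the connection probability is exactly 1/2, and lowering T sharpens the step, recovering a pure distance-threshold graph in the T → 0 limit.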
Inferring persistent interdomain congestion
There is significant interest in the technical and policy communities regarding the extent, scope, and consumer harm of persistent interdomain congestion. We provide empirical grounding for discussions of interdomain congestion by developing a system and method to measure congestion on thousands of interdomain links without direct access to them. We implement a system based on the Time Series Latency Probes (TSLP) technique that identifies links with evidence of recurring congestion suggestive of an under-provisioned link. We deploy our system at 86 vantage points worldwide and show that congestion inferred using our lightweight TSLP method correlates with other metrics of interconnection performance impairment. We use our method to study interdomain links of eight large U.S. broadband access providers from March 2016 to December 2017, and validate our inferences against ground-truth traffic statistics from two of the providers. For the period of time over which we gathered measurements, we did not find evidence of widespread endemic congestion on interdomain links between access ISPs and directly connected transit and content providers, although some such links exhibited recurring congestion patterns. We describe limitations, open challenges, and a path toward the use of this method for large-scale third-party monitoring of the Internet interconnection ecosystem
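A minimal sketch of the TSLP intuition (illustrative thresholds, not the paper's calibrated method): probing the near and far routers of an interdomain link and differencing the two RTT series isolates latency added by the link itself, and bins where that difference rises well above its baseline suggest queueing on an under-provisioned link.

```python
def congested_intervals(near_rtt, far_rtt, baseline_q=0.1, thresh_ms=20):
    """Flag time bins where the link's extra latency (far-side RTT
    minus near-side RTT) exceeds a low-quantile baseline by more
    than thresh_ms. Values are illustrative, not calibrated."""
    diff = [f - n for n, f in zip(near_rtt, far_rtt)]
    base = sorted(diff)[int(baseline_q * (len(diff) - 1))]
    return [i for i, d in enumerate(diff) if d - base > thresh_ms]

# Synthetic example: the link adds ~2 ms when idle and ~30 ms
# during three recurring busy bins.
near = [10] * 8
far = [12, 12, 40, 42, 12, 12, 41, 12]
congested = congested_intervals(near, far)
```

In the real system, recurrence of such elevated periods across days (e.g., diurnal patterns) is what distinguishes persistent congestion from transient spikes.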
Snazer: the simulations and networks analyzer
Background: Networks are widely recognized as key determinants of structure and function in systems that span the biological, physical, and social sciences. They are static pictures of the interactions among the components of complex systems. Often, much effort is required to identify networks as part of particular patterns, as well as to visualize and interpret them. From a purely dynamical perspective, simulation represents a relevant way out. Many simulator tools capitalize on the "noisy" behavior of some systems and use formal models to represent cellular activities as temporal trajectories. Statistical methods have been applied to fairly large numbers of replicated trajectories in order to infer knowledge. A tool that both graphically manipulates reactive models and deals with sets of simulation time-course data by aggregation, interpretation, and statistical analysis is missing and could add value to simulators.
Results: We designed and implemented Snazer, the simulations and networks analyzer. Its goal is to aid the processes of visualizing and manipulating reactive models, as well as to share and interpret time-course data produced by stochastic simulators or by any other means.
Conclusions: Snazer is a solid prototype that integrates biological network and simulation time-course data analysis techniques.
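The aggregation step described above, summarizing replicated stochastic trajectories, can be illustrated with a small sketch (Python purely for illustration; this is not Snazer code):

```python
from statistics import mean, stdev

def aggregate_trajectories(runs):
    """Aggregate replicated simulation runs (equal-length time
    courses) into per-timepoint mean and sample standard deviation,
    the kind of summary statistic used to interpret noisy output."""
    cols = list(zip(*runs))  # transpose: one tuple per timepoint
    return [mean(c) for c in cols], [stdev(c) for c in cols]

# Three replicated runs of a hypothetical stochastic model.
runs = [[0, 2, 4], [0, 4, 8], [0, 3, 6]]
means, sds = aggregate_trajectories(runs)
```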
DTS: a Decentralized Tracing System
Peer reviewed.
A new generation of widely distributed systems to measure the Internet topology at the interface level is currently being deployed. Cooperation between monitors in these systems is required in order to avoid over-consumption of network resources. This paper proposes an architecture for a distributed topology measurement (DTM) system that, for the first time, decentralizes probing information. The key idea of our proposal is that, by utilizing a shared database as a communication method among monitors and taking advantage of the characteristics of the Doubletree algorithm, we can get rid of a specific control point, and a DTM system can be constructed in a decentralized manner. In this paper, we describe our implementation of a DTM system, called the Decentralized Tracing System (DTS). Decentralization within DTS is achieved using various distributed hash tables (DHTs), each one being dedicated to a particular plane (i.e., control or data). We also provide preliminary evaluation results.
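The Doubletree-based sharing idea can be sketched with a toy stand-in for the DHT-backed shared database (hypothetical names; not DTS code): monitors publish (interface, destination) pairs to a global stop set and halt forward probing on the first hop another monitor has already recorded for that destination.

```python
class SharedStopSet:
    """Toy stand-in for the shared database: a plain set playing
    the role of the DHT. Monitors publish stop-set entries and
    consult them before probing, avoiding redundant traffic."""
    def __init__(self):
        self.store = set()
    def put(self, interface, dest):
        self.store.add((interface, dest))
    def seen(self, interface, dest):
        return (interface, dest) in self.store

def probe_forward(path_to_dest, dest, stopset):
    """Probe hop by hop toward dest, stopping at the first
    interface already present in the global stop set."""
    probed = []
    for hop in path_to_dest:
        if stopset.seen(hop, dest):
            break                      # someone else covered the rest
        probed.append(hop)
        stopset.put(hop, dest)
    return probed

stopset = SharedStopSet()
first = probe_forward(["r1", "r2", "r3"], "d", stopset)   # full path
second = probe_forward(["r4", "r2", "r3"], "d", stopset)  # stops at r2
```

The second monitor probes only its unique first hop, which is the redundancy reduction Doubletree provides; DTS's contribution is storing this stop set in DHTs rather than at a central control point.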